
    Automatic driver distraction detection using deep convolutional neural networks

    Recently, the number of road accidents has increased worldwide due to driver distraction. These crashes often lead to injuries, loss of property, and even death. It is therefore essential to monitor and analyze a driver's behavior while driving in order to detect distraction and reduce the number of road accidents. Machine learning and deep learning can play a significant role in detecting behaviors such as using a cell phone, talking to others, eating, sleeping, or a lack of concentration while driving. However, this process may require high computational capacity to train the model on a very large training dataset. In this paper, we develop a CNN-based method to detect a distracted driver and identify the cause of distraction, such as talking, sleeping, or eating, by means of face and hand localization. Four architectures, namely CNN, VGG-16, ResNet50, and MobileNetV2, have been adopted for transfer learning. To verify its effectiveness, the proposed model is trained on thousands of images from a publicly available dataset containing ten different postures or conditions of a distracted driver, and the results are analyzed using various performance metrics. The results show that the pre-trained MobileNetV2 model has the best classification performance. © 2022 The Author(s).
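    The abstract evaluates the ten-posture classifiers with "various performance metrics" but does not list them; a common choice is per-class accuracy, precision, and recall. The sketch below computes these one-vs-rest from predicted labels; the posture names are hypothetical stand-ins for the dataset's classes, not the paper's actual label set.

```python
def metrics(y_true, y_pred, positive):
    """Accuracy, precision and recall with one class treated as the
    positive label (one-vs-rest over a multi-class posture set)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    acc = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return acc, precision, recall

# Hypothetical labels for three of the dataset's ten driver postures.
y_true = ["safe", "phone", "phone", "eating", "safe"]
y_pred = ["safe", "phone", "safe", "eating", "safe"]
print(metrics(y_true, y_pred, "phone"))  # (0.8, 1.0, 0.5)
```

    In a real pipeline these would be reported per class and macro-averaged, since distraction classes are rarely balanced.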

    Device agent assisted blockchain leveraged framework for Internet of Things

    Blockchain (BC) is a burgeoning technology that has emerged as a promising solution to peer-to-peer communication security and privacy challenges. As a revolutionary technology, blockchain has drawn the attention of academics and researchers. Cryptocurrencies have already effectively utilized BC technology. Many researchers have sought to implement this technique in different sectors, including the Internet of Things. To store and manage IoT data, we present in this paper a lightweight BC-based architecture with a modified raft algorithm-based consensus protocol. We designed a Device Agent that executes a novel registration procedure to connect IoT devices to the blockchain. We implemented the framework on Docker using the Go programming language. We have simulated the framework on a Linux environment hosted in the cloud. We have conducted a detailed performance analysis using a variety of measures. The results demonstrate that our suggested solution is suitable for facilitating the management of IoT data with increased security and privacy. In terms of throughput and block generation time, the results indicate that our solution might be 40% to 45% faster than the existing blockchain. © 2013 IEEE
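    The framework stores IoT data on a blockchain; independently of the paper's modified-Raft consensus and Go/Docker implementation, the core tamper-evidence property comes from hash-chaining blocks. A minimal Python sketch of that idea (field names and device IDs are illustrative, not the paper's schema):

```python
import hashlib
import json

def make_block(index, payload, prev_hash):
    """Create a block whose hash covers its index, payload and the
    previous block's hash, forming a tamper-evident chain."""
    header = {"index": index, "payload": payload, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(header, sort_keys=True).encode()).hexdigest()
    header["hash"] = digest
    return header

def valid(chain):
    """Re-derive every hash and check each block points at its parent."""
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("index", "payload", "prev")}
        expect = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expect:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block(0, {"device": "sensor-1", "reading": 21.5}, "0" * 64)
chain = [genesis, make_block(1, {"device": "sensor-2", "reading": 19.8}, genesis["hash"])]
print(valid(chain))  # True
```

    Altering any stored reading changes that block's recomputed hash, so `valid` fails; consensus (Raft in the paper) then decides which honest chain the nodes agree on.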

    A Dependable Hybrid Machine Learning Model for Network Intrusion Detection

    Network intrusion detection systems (NIDSs) play an important role in computer network security. Among the available detection mechanisms, anomaly-based automated detection significantly outperforms the others. Amid the sophistication and growing number of attacks, dealing with large amounts of data is a recognized issue in the development of anomaly-based NIDSs. However, do current models meet the needs of today's networks in terms of required accuracy and dependability? In this research, we propose a new hybrid model that combines machine learning and deep learning to increase detection rates while securing dependability. Our proposed method ensures efficient pre-processing by combining SMOTE for data balancing and XGBoost for feature selection. We compared our developed method against various machine learning and deep learning algorithms to find the most efficient algorithm to implement in the pipeline. Furthermore, we chose the most effective model for network intrusion detection based on a set of benchmarked performance criteria. Our method produces excellent results when tested on two datasets, KDDCUP'99 and CIC-MalMem-2022, with accuracies of 99.99% and 100%, respectively, and no overfitting or Type-1 and Type-2 errors. Comment: Accepted in the Journal of Information Security and Applications (Scopus, Web of Science (SCIE) journal, Quartile: Q1, CiteScore: 7.6, Impact Factor: 4.96) on 7 December 202
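    The pipeline balances classes with SMOTE before training. As a rough illustration of what SMOTE does (interpolate between a minority sample and one of its nearest neighbours), here is a stdlib-only sketch; the real pipeline would use a library implementation such as imbalanced-learn's `SMOTE`, and the toy points below are invented:

```python
import random

def smote(minority, n_new, k=2, seed=0):
    """Generate synthetic minority samples by interpolating between a
    sample and one of its k nearest neighbours (the classic SMOTE idea)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # k nearest neighbours of a by squared Euclidean distance
        neighbours = sorted(
            (p for p in minority if p is not a),
            key=lambda p: sum((x - y) ** 2 for x, y in zip(a, p)),
        )[:k]
        b = rng.choice(neighbours)
        t = rng.random()
        synthetic.append(tuple(x + t * (y - x) for x, y in zip(a, b)))
    return synthetic

# Three hypothetical minority-class feature vectors.
minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1)]
new_points = smote(minority, n_new=3)
print(len(new_points))  # 3
```

    Because each synthetic point is a convex combination of two real minority samples, it stays inside the minority region rather than being a naive duplicate.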

    Cyberbullying detection on social networks using machine learning approaches

    The use of social media has grown exponentially with the growth of the Internet, making it the most influential networking platform of the 21st century. However, this enhanced social connectivity often has negative impacts on society, contributing to harmful phenomena such as online abuse, harassment, cyberbullying, cybercrime, and online trolling. Cyberbullying frequently leads to serious mental and physical distress, particularly for women and children, and sometimes even drives them to attempt suicide. Online harassment attracts attention due to its strong negative social impact. Many incidents have recently occurred worldwide due to online harassment, such as the sharing of private chats, rumours, and sexual remarks. Therefore, the identification of bullying text or messages on social media has gained a growing amount of attention among researchers. The purpose of this research is to design and develop an effective technique to detect online abusive and bullying messages by merging natural language processing and machine learning. Two distinct features, namely Bag-of-Words (BoW) and term frequency-inverse document frequency (TF-IDF), are used to analyse the accuracy of four distinct machine learning algorithms. © 2020 IEEE
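    The two feature schemes the abstract compares can be sketched compactly: BoW is just the raw term counts, and TF-IDF rescales them by how rare a term is across documents. A minimal stdlib implementation, assuming plain `log(N / df)` IDF (libraries such as scikit-learn use smoothed variants) and invented toy messages:

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF weights for tokenised documents: TF is the raw count in a
    document (the BoW value); IDF is log(N / document frequency)."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))       # count each term once per document
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({w: tf[w] * math.log(n / df[w]) for w in tf})
    return weights

docs = [
    ["you", "are", "great"],
    ["you", "are", "stupid"],      # hypothetical bullying message
    ["have", "a", "nice", "day"],
]
w = tf_idf(docs)
# "stupid" appears in one of three documents, so it is weighted higher
# than "you", which appears in two.
print(w[1]["stupid"] > w[1]["you"])  # True
```

    This rescaling is why TF-IDF often separates abusive messages better than raw BoW: the distinctive insult terms dominate the feature vector instead of common function words.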

    Link prediction by correlation on social network

    In a social network, the topology of the network grows through the formation of links. A connection between two nodes in a social network indicates confidence in terms of the similarity of some activities. Generally, a new link in a social network is created from different perspectives, such as familiarity, cohesiveness, and geographical location. The concept of the link in the social network has been utilized to discover hidden meaning in different fields, such as e-commerce, bioinformatics, and information retrieval. The prediction of a new link between two nodes in a social network is normally accomplished based on the nature of the topology, and the similarity function among nodes is defined with the help of the number of common friends. In this paper, we propose two link prediction algorithms, a Local Link Prediction Algorithm and a Global Link Prediction Algorithm, by taking into consideration users' activities as well as their common friends. We apply two measures, a correlation-based cScore and an influence-based iScore, to quantify the similarity between the two predicted nodes. Finally, we analyze the performance of the proposed algorithms using the DBLP, PPI, PB, and USAir data sets, and the experimental results attest that our link prediction algorithms outperform the existing algorithms
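    The common-friends similarity the abstract builds on is easy to make concrete. The sketch below shows the standard common-neighbours and Jaccard baselines that such work is usually compared against; it is not the paper's cScore or iScore, whose exact formulas are not given in the abstract, and the toy graph is invented:

```python
def common_neighbours(graph, u, v):
    """Number of neighbours shared by u and v."""
    return len(graph[u] & graph[v])

def jaccard(graph, u, v):
    """Common neighbours normalised by the union of neighbourhoods."""
    union = graph[u] | graph[v]
    return len(graph[u] & graph[v]) / len(union) if union else 0.0

# Toy friendship graph as adjacency sets.
graph = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c"},
    "e": {"d"},
}
# Rank candidate new links for node "b" by Jaccard similarity.
candidates = [v for v in graph if v != "b" and v not in graph["b"]]
ranked = sorted(candidates, key=lambda v: jaccard(graph, "b", v), reverse=True)
print(ranked[0])  # d: b and d share neighbours a and c
```

    Activity-aware scores like the paper's cScore refine this ranking by weighting shared neighbours by how similarly the two users behave, rather than counting them equally.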

    Energy Efficient Routing Mechanism for Mobile Ad Hoc Networks (Md. Manowarul Islam et al., International Journal of Computer Science Engineering (IJCSE))

    A mobile ad hoc network (MANET) is a collection of self-organizing mobile nodes without any fixed physical infrastructure. Most of the nodes are small in size with limited battery energy. The dynamic network topology and the limited energy and bandwidth constraints make routing a critical problem. An efficient routing protocol can ensure high performance by increasing network lifetime. To make the network more efficient, the routing protocol should ensure maximum use of network resources such as limited residual energy and bandwidth. In this paper, we propose a routing approach called Energy Efficient Routing Mechanism for Mobile Ad Hoc Networks (EER), in which a source node is capable of discovering an energy-efficient route to its desired destination node. The proposed EER mechanism adds intelligence to mobile nodes, which create and fire fuzzy rules to develop a new route during the discovery phase. By taking into account nodal energy and current queue status, our mechanism applies fuzzy rules to develop a route that provides longer network lifetime and improves network performance significantly. Simulation results in Network Simulator 2 (NS-2) show that the EER mechanism outperforms the existing protocols
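    The fuzzy route selection can be illustrated with a single rule of the kind the abstract describes; the membership functions, rule, and candidate values below are invented for illustration, since the paper's actual rule base is not given in the abstract:

```python
def high_energy(e):
    """Membership of 'energy is high'; e is residual energy in [0, 1]."""
    return e

def low_queue(q):
    """Membership of 'queue is low'; q is queue occupancy in [0, 1]."""
    return 1.0 - q

def route_fitness(energy, queue):
    """Fire one fuzzy rule, 'IF energy is high AND queue is low THEN the
    node is a good relay', using min as the fuzzy AND."""
    return min(high_energy(energy), low_queue(queue))

# Hypothetical candidate next hops: (residual energy, queue occupancy).
candidates = {"n1": (0.9, 0.7), "n2": (0.6, 0.1), "n3": (0.3, 0.2)}
best = max(candidates, key=lambda n: route_fitness(*candidates[n]))
print(best)  # n2: moderate energy but a nearly empty queue
```

    Note how the fuzzy AND penalises n1 despite its high energy: its congested queue caps the rule's firing strength, which is exactly the trade-off between residual energy and queue status that EER exploits.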

    Machine learning based diabetes prediction and development of smart web application

    Diabetes is a very common disease affecting individuals worldwide. It increases the risk of long-term complications, including heart disease and kidney failure, among others. People might live longer and lead healthier lives if this disease is detected early. Supervised machine learning models trained on appropriate datasets can aid in diagnosing diabetes at an early stage. The goal of this work is to find effective machine-learning-based classifier models for detecting diabetes in individuals using clinical data. The machine learning algorithms trained on several datasets in this article include Decision Tree (DT), Naive Bayes (NB), k-Nearest Neighbor (KNN), Random Forest (RF), Gradient Boosting (GB), Logistic Regression (LR), and Support Vector Machine (SVM). We have applied efficient pre-processing techniques, including label encoding and normalization, which improve the accuracy of the models. Further, using various feature selection approaches, we have identified and prioritized a number of risk factors. Extensive experiments have been conducted to analyze the performance of the models on two different datasets. Compared with some recent studies, the results show that the proposed model improves accuracy by 2.71% to 13.13%, depending on the dataset and the adopted ML algorithm. Finally, the machine learning algorithm showing the highest accuracy is selected for further development, and we integrate this model into a web application using the Python Flask web development framework. The results of this study suggest that an appropriate preprocessing pipeline on clinical data combined with ML-based classification can predict diabetes accurately and efficiently
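    The two pre-processing steps the abstract credits for the accuracy gain, label encoding and normalization, can be sketched directly; the clinical column names and values below are hypothetical examples, not the paper's datasets:

```python
def label_encode(values):
    """Map each distinct category to an integer code (sorted for
    reproducibility) and return the encoded column plus the mapping."""
    codes = {v: i for i, v in enumerate(sorted(set(values)))}
    return [codes[v] for v in values], codes

def min_max(values):
    """Min-max normalization: scale numeric values into [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values] if hi > lo else [0.0] * len(values)

# Hypothetical clinical columns.
genders, mapping = label_encode(["Female", "Male", "Female"])
glucose = min_max([80.0, 120.0, 200.0])
print(genders)  # [0, 1, 0]
print(glucose)  # [0.0, ~0.333, 1.0]
```

    Normalization matters most for distance- and margin-based models in the comparison (KNN, SVM): without it, a large-range feature like glucose would dominate small-range features in the distance computation.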

    DTLCx: An Improved ResNet Architecture to Classify Normal and Conventional Pneumonia Cases from COVID-19 Instances with Grad-CAM-Based Superimposed Visualization Utilizing Chest X-ray Images

    COVID-19 is a severe, contagious respiratory disease that has spread all over the world. COVID-19 has terribly impacted public health, daily life, and the global economy. Although some developed countries have advanced well in detecting and managing this coronavirus, most developing countries have difficulty detecting COVID-19 cases among the mass population. In many countries, there is a scarcity of COVID-19 testing kits and other resources due to the increasing rate of COVID-19 infections. This deficit of testing resources and the increasing number of daily cases encouraged us to develop a deep learning model to aid clinicians and radiologists and provide timely assistance to patients. In this article, an efficient deep learning-based model that detects COVID-19 cases from a chest X-ray image dataset is proposed and investigated. The proposed model is developed based on the ResNet50V2 architecture, whose base is extended with six extra layers to make the model more robust and efficient. Finally, Grad-CAM-based discriminative localization is used to readily interpret the detections on the radiological images. Two publicly available datasets were gathered from different sources, with class labels for normal, confirmed COVID-19, bacterial pneumonia, and viral pneumonia cases. Our proposed model obtained an overall accuracy of 99.51% for the four-class case (COVID-19/normal/bacterial pneumonia/viral pneumonia) on Dataset-2, and 96.52% for the three-class case (normal/COVID-19/bacterial pneumonia) and 99.13% for the two-class case (COVID-19/normal) on Dataset-1. The accuracy level of the proposed model might motivate radiologists to rapidly detect and diagnose COVID-19 cases
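    The Grad-CAM step used for interpretability has a simple core: weight each convolutional feature map by the global average of its gradients with respect to the class score, sum across channels, and apply ReLU. A stdlib-only sketch of that computation on tiny invented 2x2 feature maps (a real implementation would pull activations and gradients from the network, e.g. ResNet50V2's last convolutional block):

```python
def grad_cam(activations, gradients):
    """Grad-CAM heatmap: weight each feature map by the mean of its
    gradients, sum across channels, then apply ReLU."""
    h, w = len(activations[0]), len(activations[0][0])
    heatmap = [[0.0] * w for _ in range(h)]
    for fmap, grad in zip(activations, gradients):
        # Channel weight alpha = global average of the gradient map.
        alpha = sum(sum(row) for row in grad) / (h * w)
        for i in range(h):
            for j in range(w):
                heatmap[i][j] += alpha * fmap[i][j]
    # ReLU keeps only regions that positively influence the class score.
    return [[max(0.0, v) for v in row] for row in heatmap]

# Two hypothetical 2x2 feature maps and their gradients for one class.
acts = [[[1.0, 0.0], [0.0, 2.0]], [[0.0, 1.0], [1.0, 0.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]], [[-1.0, -1.0], [-1.0, -1.0]]]
cam = grad_cam(acts, grads)
print(cam)  # [[1.0, 0.0], [0.0, 2.0]]
```

    Upsampling this heatmap to the input resolution and overlaying it on the X-ray gives the superimposed visualization the title refers to, showing which lung regions drove the prediction.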